An Optimal High-Order Tensor Method for Convex Optimization


Abstract

This paper is concerned with finding an optimal algorithm for minimizing a composite convex objective function. The basic setting is that the objective is the sum of two convex functions: the first function is smooth, with up to dth-order derivative information available, and the second function is possibly nonsmooth, but its proximal tensor mappings can be computed approximately in an efficient manner. The problem is to find, in that setting, the best possible (optimal) iteration complexity for convex optimization. Along that line, for the smooth case (without the nonsmooth part in the objective), Nesterov proposed an optimal first-order method with iteration complexity O(1/k^2), whereas high-order tensor algorithms (using up to general dth-order tensor information) with iteration complexity O(1/k^{d+1}) were recently established. In this paper, we propose a new high-order tensor algorithm for the general composite case, with iteration complexity O(1/k^{(3d+1)/2}), which matches the lower bound for dth-order methods as previously established and hence is optimal. Our approach is based on the accelerated hybrid proximal extragradient (A-HPE) framework of Monteiro and Svaiter, in which a bisection procedure is installed within each A-HPE iteration. At each step of the bisection, a subproblem is approximately solved, and the total number of bisection steps per A-HPE iteration is shown to be bounded by a factor logarithmic in the precision required.
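The abstract's claim that each A-HPE iteration needs only logarithmically many bisection steps follows from the standard interval-halving argument. A minimal generic Python sketch of that argument, with a hypothetical stand-in acceptance condition (the paper's actual test involves solving a tensor proximal subproblem at each trial step size, which is not reproduced here):

```python
def bisect_step_size(accept, lo, hi, tol=1e-8):
    """Bisection on a scalar step-size parameter lam.

    `accept(lam)` is assumed monotone: negative when lam is too small,
    positive when lam is too large. This is a stand-in for the A-HPE
    coupling condition; the real test in the paper requires an
    approximate subproblem solve at each trial lam.

    Returns (lam, steps); steps is O(log((hi - lo) / tol)), which is
    the source of the logarithmic factor in the overall complexity.
    """
    steps = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if accept(mid) > 0:
            hi = mid
        else:
            lo = mid
        steps += 1
    return 0.5 * (lo + hi), steps


# Toy monotone condition lam^2 - 2 = 0, with root at sqrt(2):
# the interval [0, 2] shrinks by half per step, so roughly
# log2(2 / 1e-8) ~ 28 steps suffice.
lam, steps = bisect_step_size(lambda x: x * x - 2.0, 0.0, 2.0)
```

The design point the sketch illustrates is that the cost of locating an acceptable step size is decoupled from the target accuracy of the outer method: it enters only as a multiplicative logarithmic factor.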



Similar Articles

An adaptive accelerated first-order method for convex optimization

In this paper, we present a new accelerated variant of Nesterov’s method for solving a class of convex optimization problems, in which certain acceleration parameters are adaptively (and aggressively) chosen so as to preserve the theoretical iteration complexity of the original method while substantially improving its practical performance in comparison to other existing variants. Computatio...


An optimal algorithm for bandit convex optimization

We consider the problem of online convex optimization against an arbitrary adversary with bandit feedback, known as bandit convex optimization. We give the first Õ(√T)-regret algorithm for this setting based on a novel application of the ellipsoid method to online learning. This bound is known to be tight up to logarithmic factors. Our analysis introduces new tools in discrete convex geometry.


An optimal first-order primal-dual gap reduction framework for constrained convex optimization

We introduce an analysis framework for constructing optimal first-order primal-dual methods for the prototypical constrained convex optimization template. While this class of methods offers scalability advantages in obtaining numerical solutions, they have the disadvantage of producing sequences that are only approximately feasible to the problem constraints. As a result, it is theoretically ch...


An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback

We consider the closely related problems of bandit convex optimization with two-point feedback, and zero-order stochastic convex optimization with two function evaluations per round. We provide a simple algorithm and analysis which is optimal for convex Lipschitz functions. This improves on Duchi et al. (2015), which only provides an optimal result for smooth functions. Moreover, the algorithm ...


Convex Regularization for High-Dimensional Tensor Regression

In this paper we present a general convex optimization approach for solving high-dimensional tensor regression problems under low-dimensional structural assumptions. We consider using convex and weakly decomposable regularizers, assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds of the resulting estimate under fairl...



Journal

Journal title: Mathematics of Operations Research

Year: 2021

ISSN: 0364-765X, 1526-5471

DOI: https://doi.org/10.1287/moor.2020.1103